Radon–Sobolev Variational Auto-Encoders

Authors

Abstract

The quality of generative models (such as Generative Adversarial Networks and Variational Auto-Encoders) depends heavily on the choice of a good probability distance. However, some popular metrics, such as the Wasserstein or sliced distances, the Jensen–Shannon divergence, and the Kullback–Leibler divergence, lack convenient properties such as (geodesic) convexity, fast evaluation, and so on. To address these shortcomings, we introduce a class of distances that have built-in convexity. We investigate their relationship with known paradigms (sliced distances, a synonym for Radon distances; reproducing kernel Hilbert spaces; energy distances). The distances are shown to possess fast implementations and are included in an adapted Variational Auto-Encoder, termed the Radon–Sobolev Variational Auto-Encoder (RS-VAE), which produces high-quality results on standard generative datasets.
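To illustrate the Radon/sliced viewpoint the abstract refers to, here is a minimal NumPy sketch of a projection-averaged distance between two sample sets. This is the common sliced Wasserstein-2 construction, not the paper's specific Radon–Sobolev distance; the function name and parameters are illustrative only.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, seed=0):
    """Monte-Carlo estimate of the sliced Wasserstein-2 distance.

    x, y: (n, d) arrays holding n samples each from two distributions.
    Each random unit direction gives a 1-D projection of both sample
    sets (the Radon-transform view); the 1-D Wasserstein-2 distance
    between projected samples reduces to sorting, and the results are
    averaged over directions.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Random unit vectors: the projection directions.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction: shape (n_projections, n).
    xp = np.sort(theta @ x.T, axis=1)
    yp = np.sort(theta @ y.T, axis=1)
    # 1-D W2 between sorted samples, averaged over directions.
    return np.sqrt(np.mean((xp - yp) ** 2))

# Identical sample sets are at distance zero.
x = np.random.default_rng(1).normal(size=(256, 2))
print(sliced_wasserstein(x, x))  # 0.0
```

Because each 1-D optimal transport problem is solved by sorting, the whole estimate costs O(P·n log n) for P projections, which is the kind of fast evaluation the abstract contrasts with the full Wasserstein distance.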


Related articles

Variational Recurrent Auto-Encoders

In this paper we propose a model that combines the strengths of RNNs and SGVB: the Variational Recurrent Auto-Encoder (VRAE). Such a model can be used for efficient, large scale unsupervised learning on time series data, mapping the time series data to a latent vector representation. The model is generative, such that data can be generated from samples of the latent space. An important contribu...

Variational Graph Auto-Encoders

[Figure 1: Latent space of unsupervised VGAE model trained on Cora citation network dataset [1]. Grey lines denote citation links; colors denote document class (not provided during training).] We introduce the variational graph autoencoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) [2, 3]. This model ma...

Hyperspherical Variational Auto-Encoders

The Variational Auto-Encoder (VAE) is one of the most used unsupervised machine learning models. But although the default choice of a Gaussian distribution for both the prior and posterior represents a mathematically convenient distribution often leading to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue w...

Improving Variational Auto-Encoders using Householder Flow

Variational auto-encoders (VAE) are scalable and powerful generative models. However, the choice of the variational posterior determines tractability and flexibility of the VAE. Commonly, latent variables are modeled using the normal distribution with a diagonal covariance matrix. This results in computational efficiency but typically it is not flexible enough to match the true posterior distri...

Salience Estimation via Variational Auto-Encoders for Multi-Document Summarization

We propose a new unsupervised sentence salience framework for Multi-Document Summarization (MDS), which can be divided into two components: latent semantic modeling and salience estimation. For latent semantic modeling, a neural generative model called Variational Auto-Encoders (VAEs) is employed to describe the observed sentences and the corresponding latent semantic representations. Neural va...


Journal

Journal: Neural Networks

Year: 2021

ISSN: 1879-2782, 0893-6080

DOI: https://doi.org/10.1016/j.neunet.2021.04.018